Multi-Information Source Optimization

Matthias Poloczek, Jialei Wang, Peter Frazier

Neural Information Processing Systems

This acquisition function generalizes and parallelizes previously proposed knowledge-gradient methods for single-information-source (single-IS) Bayesian optimization [7, 8, 28, 26, 37] to the multi-information source optimization (MISO) setting. We prove that this algorithm provides an asymptotically near-optimal solution.
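In the single-IS setting the paper builds on, the knowledge-gradient acquisition values a candidate point by the expected improvement in the maximum of the posterior mean after observing it. Below is a minimal Monte Carlo sketch on a finite grid; the RBF kernel, toy objective, grid, and noise level are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Illustrative knowledge-gradient (KG) sketch: a GP posterior over a 1-D
# grid, with KG(x) estimated by "fantasizing" observations at x.

def rbf(A, B, ls=0.2):
    """Squared-exponential kernel between 1-D point sets (assumed lengthscale)."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def posterior(X_obs, y_obs, X, noise=1e-3):
    """Exact GP posterior mean and covariance on grid X given observations."""
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf(X_obs, X)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y_obs
    cov = rbf(X, X) - Ks.T @ sol
    return mu, cov

def knowledge_gradient(X_obs, y_obs, x_cand, X_grid, n_mc=50, noise=1e-3, seed=0):
    """KG(x) = E[max posterior mean after observing x] - max posterior mean now,
    estimated by Monte Carlo over fantasized outcomes at x_cand."""
    rng = np.random.default_rng(seed)
    mu, _ = posterior(X_obs, y_obs, X_grid, noise)
    best_now = mu.max()
    mu_c, cov_c = posterior(X_obs, y_obs, np.array([x_cand]), noise)
    std_c = np.sqrt(cov_c[0, 0] + noise)   # predictive std of a noisy observation
    X_aug = np.append(X_obs, x_cand)
    gains = []
    for _ in range(n_mc):
        y_f = rng.normal(mu_c[0], std_c)   # fantasized observation at x_cand
        mu_f, _ = posterior(X_aug, np.append(y_obs, y_f), X_grid, noise)
        gains.append(mu_f.max() - best_now)
    return float(np.mean(gains))

# Toy objective (an assumption for illustration): f(x) = sin(6x) on [0, 1].
X_obs = np.array([0.1, 0.5, 0.9])
y_obs = np.sin(6 * X_obs)
X_grid = np.linspace(0.0, 1.0, 101)
kg = knowledge_gradient(X_obs, y_obs, x_cand=0.3, X_grid=X_grid)
```

By the tower property, the exact KG is non-negative, so maximizing it over candidates picks the measurement expected to improve the decision-maker's final recommendation the most; the MISO generalization additionally divides this value by each information source's query cost.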



Monitoring State Transitions in Markovian Systems with Sampling Cost

Kumar Saurav, Ness B. Shroff, Yingbin Liang

arXiv.org Machine Learning

We consider a node-monitor pair, where the node's state varies with time. The monitor needs to track the node's state at all times; however, there is a fixed cost for each state query. So the monitor may instead predict the state using time-series forecasting methods, including time-series foundation models (TSFMs), and query only when prediction uncertainty is high. Since query decisions influence prediction accuracy, determining when to query is nontrivial. A natural approach is a greedy policy that predicts when the expected prediction loss is below the query cost and queries otherwise. We analyze this policy in a Markovian setting, where the optimal (OPT) strategy is a state-dependent threshold policy minimizing the time-averaged sum of query cost and prediction losses. We show that, in general, the greedy policy is suboptimal and can have an unbounded competitive ratio, but under common conditions such as identically distributed transition probabilities, it performs close to OPT. For the case of unknown transition probabilities, we further propose a projected stochastic gradient descent (PSGD)-based learning variant of the greedy policy, which achieves a favorable predict-query tradeoff with improved computational efficiency compared to OPT.
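The greedy policy in the abstract can be sketched as follows, assuming a known transition matrix, a 0/1 prediction loss, and a hypothetical three-state chain (none of these specifics are from the paper): the monitor propagates its belief over the node's state since the last query, predicts when the expected loss is below the query cost, and queries otherwise.

```python
import random

def propagate(belief, P):
    """One-step belief update: belief' = belief @ P (row-stochastic P)."""
    n = len(P)
    return [sum(belief[i] * P[i][j] for i in range(n)) for j in range(n)]

def greedy_monitor(P, query_cost, horizon=1000, seed=0):
    """Greedy predict-or-query policy under a 0/1 prediction loss (illustrative)."""
    rng = random.Random(seed)
    n = len(P)
    state = 0                             # true node state
    belief = [1.0] + [0.0] * (n - 1)      # monitor's belief right after a query
    total_cost = 0.0
    queries = 0
    for _ in range(horizon):
        state = rng.choices(range(n), weights=P[state])[0]  # node transitions
        belief = propagate(belief, P)
        expected_loss = 1.0 - max(belief)  # expected 0/1 prediction loss
        if expected_loss > query_cost:
            total_cost += query_cost       # query: pay the fixed cost ...
            queries += 1
            belief = [0.0] * n
            belief[state] = 1.0            # ... and resync the belief
        else:
            guess = belief.index(max(belief))  # predict the most likely state
            total_cost += float(guess != state)
    return total_cost / horizon, queries

# Hypothetical "sticky" chain: states persist, so prediction is usually safe
# and the greedy policy only queries once its belief has decayed enough.
P = [[0.90, 0.05, 0.05],
     [0.05, 0.90, 0.05],
     [0.05, 0.05, 0.90]]
avg_cost, queries = greedy_monitor(P, query_cost=0.2)
```

For this symmetric chain the expected loss depends only on the time since the last query, so the greedy policy reduces to querying at a fixed interval; the paper's point is that for general (e.g., state-dependent) chains this myopic rule can be arbitrarily worse than the optimal state-dependent threshold policy.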